Axion-Dilaton Cosmology and Dark Energy
We discuss a class of flat FRW cosmological models based on D=4 axion-dilaton
gravity universally coupled to cosmological background fluids. In particular,
we investigate the possibility of recurrent acceleration, which was recently
shown to be generically realized in a wide class of axion-dilaton models, but
in absence of cosmological background fluids. We observe that, once we impose
the existence of radiation- and matter-dominated earlier stages of cosmic
evolution, the axion-dilaton dynamics is altered significantly with respect to
the case of pure axion-dilaton gravity. During the matter dominated epoch the
scalar fields remain either frozen, due to the large expansion rate, or enter a
cosmological scaling regime. In both cases, oscillations of the effective
equation of state around the acceleration boundary value are impossible. Models
which enter an oscillatory stage in the low redshift regime, on the other hand,
are disfavored by observations. We also comment on the viability of the
axion-dilaton system as a candidate for dynamical dark energy. In a certain
subclass of models, an intermediate scaling regime is succeeded by eternal
acceleration. We also briefly discuss the issue of dependence on initial
conditions. Comment: 28 pages, 11 figures
Reconfiguration of optical-NFV network architectures based on cloud resource allocation and QoS degradation cost-aware prediction techniques
The long time required to deploy cloud resources in Network Function Virtualization network architectures has led to the proposal and investigation of algorithms that predict traffic or the necessary processing and memory resources. However, whatever approach is taken, a prediction error is inevitable. Two types of prediction errors can occur, and they affect network operational costs differently. When the predicted values are higher than the real ones, the resource allocation algorithms allocate more resources than necessary, introducing an over-provisioning cost. Conversely, when the predicted values are lower than the real ones, the allocation of fewer resources degrades QoS and introduces an under-provisioning cost. When over-provisioning and under-provisioning costs differ, most of the prediction algorithms proposed in the literature are inadequate because they minimize the mean square error or other symmetric cost functions. For this reason we propose and investigate a forecasting methodology that introduces an asymmetric cost function capable of weighing the costs of over-provisioning and under-provisioning differently. We have applied the proposed forecasting methodology to resource allocation in a Network Function Virtualization architecture in which the Network Function Virtualization Infrastructure Points-of-Presence are interconnected by an elastic optical network. We have verified a cost saving of 40% compared to solutions that minimize the mean square error.
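The asymmetry between the two error costs can be made concrete with a small sketch. This is a minimal illustration of an asymmetric provisioning cost, not the paper's calibrated function; the function name and the cost weights (1 for over-provisioning, 4 for under-provisioning) are assumptions chosen for the example:

```python
import numpy as np

def provisioning_cost(predicted, actual, c_over=1.0, c_under=4.0):
    """Asymmetric cost: over-prediction wastes resources (c_over per unit),
    under-prediction degrades QoS (c_under per unit). Weights are illustrative."""
    err = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    over = np.clip(err, 0, None)    # allocated more than needed
    under = np.clip(-err, 0, None)  # allocated less than needed
    return float(np.sum(c_over * over + c_under * under))

# Two forecasters with the same mean-square error can differ sharply in
# operational cost when under-provisioning is penalized more heavily.
actual = [10, 10, 10]
over_biased = [12, 12, 12]    # always predicts +2
under_biased = [8, 8, 8]      # always predicts -2
print(provisioning_cost(over_biased, actual))   # 6.0
print(provisioning_cost(under_biased, actual))  # 24.0
```

Under a symmetric metric such as MSE the two forecasters are indistinguishable; the asymmetric cost separates them, which is the point the paper exploits.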
Proposal and investigation of an artificial intelligence (Ai)-based cloud resource allocation algorithm in network function virtualization architectures
The long time needed to reconfigure cloud resources in Network Function Virtualization network environments has led to the proposal of solutions in which prediction-based resource allocation is performed. All of them predict traffic or needed resources by minimizing symmetric loss functions such as the Mean Squared Error. Because prediction errors are inevitable, these methodologies cannot weigh positive and negative prediction errors differently, even though the two affect the total network cost differently. If the predicted traffic is higher than the real one, the network operator pays an over-allocation cost, referred to as the over-provisioning cost; conversely, in the opposite case, a Quality of Service degradation cost, referred to as the under-provisioning cost, is due to compensate users for the resource under-allocation. In this paper we propose and investigate a resource allocation strategy based on a Long Short-Term Memory algorithm whose training minimizes an asymmetric cost function that weighs the positive and negative prediction errors, and the corresponding over-provisioning and under-provisioning costs, differently. In a typical traffic and network scenario, the proposed solution achieves a cost saving of 30% with respect to a solution with a symmetric cost function.
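A standard differentiable asymmetric objective suitable for training a neural forecaster is the quantile (pinball) loss. The sketch below is an illustration of that generic loss, not the paper's actual cost function; tau = 0.8 is an assumed weight:

```python
import numpy as np

def pinball_loss(y_pred, y_true, tau=0.8):
    """Quantile (pinball) loss: a differentiable asymmetric objective.
    tau > 0.5 penalizes under-prediction more, pushing a trained model
    toward conservative (higher) forecasts. tau here is illustrative."""
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(tau * e, (tau - 1.0) * e)))

# Under-prediction (e > 0) costs tau per unit; over-prediction costs (1 - tau).
print(pinball_loss([8], [10]))   # under by 2 -> 0.8 * 2 = 1.6
print(pinball_loss([12], [10]))  # over by 2 -> 0.2 * 2 = 0.4
```

With tau > 0.5, gradient descent on such a loss biases the trained model toward over-forecasting, trading cheap over-provisioning against rare but expensive QoS degradation.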
Antiproton constraints on dark matter annihilations from internal electroweak bremsstrahlung
If the dark matter particle is a Majorana fermion, annihilations into two
fermions and one gauge boson could have, for some choices of the parameters of
the model, a non-negligible cross-section. Using a toy model of leptophilic
dark matter, we calculate the constraints on the annihilation cross-section
into two electrons and one weak gauge boson from the PAMELA measurements of the
cosmic antiproton-to-proton flux ratio. Furthermore, we calculate the maximal
astrophysical boost factor allowed in the Milky Way under the assumption that
the leptophilic dark matter particle is the dominant component of dark matter
in our Universe. These constraints constitute very conservative estimates on
the boost factor for more realistic models where the dark matter particle also
couples to quarks and weak gauge bosons, such as the lightest neutralino which
we also analyze for some concrete benchmark points. The limits on the
astrophysical boost factors presented here could be used to evaluate the
prospects to detect a gamma-ray signal from dark matter annihilations at
currently operating IACTs as well as in the projected CTA. Comment: 32 pages; 13 figures
A new viable region of the inert doublet model
The inert doublet model, a minimal extension of the Standard Model by a
second Higgs doublet, is one of the simplest and most attractive scenarios that
can explain the dark matter. In this paper, we demonstrate the existence of a
new viable region of the inert doublet model featuring dark matter masses
between M_W and about 160 GeV. Along this previously overlooked region of the
parameter space, the correct relic density is obtained thanks to cancellations
between different diagrams contributing to dark matter annihilation into gauge
bosons (W+W- and ZZ). First, we explain how these cancellations come about and
show several examples illustrating the effect of the parameters of the model on
the cancellations themselves and on the predicted relic density. Then, we
perform a full scan of the new viable region and analyze it in detail by
projecting it onto several two-dimensional planes. Finally, the prospects for
the direct and the indirect detection of inert Higgs dark matter within this
new viable region are studied. We find that present direct detection bounds
already rule out a fraction of the new parameter space and that future direct
detection experiments, such as Xenon100, will easily probe the remaining part
in its entirety. Comment: 27 pages, 16 figures
On the Detectability of Galactic Dark Matter Annihilation into Monochromatic Gamma-rays
Monochromatic gamma-rays are thought to be the smoking gun signal for
identifying dark matter annihilation. However, the flux of monochromatic
gamma-rays is usually suppressed, since dark matter should be neutral and
therefore couples to photons only through virtual quantum effects. In this
work we study the detection strategy for monochromatic gamma-rays with a
future space-based detector. The monochromatic gamma-ray flux is calculated
assuming a supersymmetric neutralino as a typical dark matter candidate. We
compare detection focused on the Galactic center with a scan mode that
collects gamma-rays from the whole Galactic halo. The detector performance
for monochromatic gamma-ray detection, for different energy and angular
resolutions, fields of view, and background rejection efficiencies, is
carefully studied with both analytical and fast Monte Carlo methods.
A Tentative Gamma-Ray Line from Dark Matter Annihilation at the Fermi Large Area Telescope
The observation of a gamma-ray line in the cosmic-ray fluxes would be a
smoking-gun signature for dark matter annihilation or decay in the Universe. We
present an improved search for such signatures in the data of the Fermi Large
Area Telescope (LAT), concentrating on energies between 20 and 300 GeV. Besides
updating to 43 months of data, we use a new data-driven technique to select
optimized target regions depending on the profile of the Galactic dark matter
halo. In regions close to the Galactic center, we find a 4.6 sigma indication
for a gamma-ray line at 130 GeV. When taking into account the look-elsewhere
effect the significance of the observed excess is 3.2 sigma. If interpreted in
terms of dark matter particles annihilating into a photon pair, the
observations imply a dark matter mass of 129.8\pm2.4^{+7}_{-13} GeV and a
partial annihilation cross-section of \langle\sigma v\rangle_{\chi\chi\to\gamma\gamma} = 1.27\pm0.32^{+0.18}_{-0.28}
x 10^-27 cm^3 s^-1 when using the Einasto dark matter profile. The evidence for
the signal is based on about 50 photons; it will take a few years of additional
data to clarify its existence. Comment: 23 pages, 9 figures, 3 tables; extended discussion; matches published version
New Constraints from PAMELA anti-proton data on Annihilating and Decaying Dark Matter
Recently the PAMELA experiment has released its updated anti-proton flux and
anti-proton to proton flux ratio data up to energies of ~200 GeV. With no clear
excess of cosmic ray anti-protons at high energies, one can extend constraints
on the production of anti-protons from dark matter. In this letter, we consider
both the cases of dark matter annihilating and decaying into standard model
particles that produce significant numbers of anti-protons. We provide two sets
of constraints on the annihilation cross-sections/decay lifetimes. In one set
we ignore any source of anti-protons other than dark matter, which gives the
highest allowed cross-sections/inverse lifetimes. In the other
set we include also anti-protons produced in collisions of cosmic rays with
interstellar medium nuclei, getting tighter but more realistic constraints on
the annihilation cross-sections/decay lifetimes. Comment: 7 pages, 3 figures, 3 tables
Robust implications on Dark Matter from the first FERMI sky gamma map
We derive robust model-independent bounds on DM annihilations and decays from
the first year of FERMI gamma-ray observations of the whole sky. These bounds
only have a mild dependence on the DM density profile and allow the following
DM interpretations of the PAMELA and FERMI electron/positron excesses: primary
channels mu+ mu-, mu+ mu- mu+ mu-, or e+ e- e+ e-. An isothermal-like density
profile is needed for annihilating DM. In all such cases, FERMI gamma spectra
must contain a significant DM component that may be probed in the future. Comment: 16 pages, 8 figures. Final version